🛠️ All DevTools
Showing 1–20 of 4270 tools
Last Updated
April 22, 2026 at 04:00 PM
open-metadata/OpenMetadata
GitHub Trending[Other] OpenMetadata is a unified metadata platform for data discovery, data observability, and data governance powered by a central metadata repository, in-depth column level lineage, and seamless team collaboration.
langfuse/langfuse
GitHub Trending[Monitoring/Observability] 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
GitHub CLI now collects pseudoanonymous telemetry
Hacker News (score: 226)[CLI Tool] GitHub CLI now collects pseudoanonymous telemetry
Show HN: Gemini Plugin for Claude Code
Show HN (score: 5)[Other] Show HN: Gemini Plugin for Claude Code I built a plugin that lets Claude Code delegate work to Gemini CLI.

I started this after finding myself reaching for Gemini more often for long-context repo work. I have especially been liking Gemini's codebase investigator for long context.

This is inspired by openai/codex-plugin-cc.

Code review, adversarial review. Under the hood it's Gemini CLI over ACP.

Would love feedback from people using Claude Code, Gemini CLI, or ACP. I am especially curious whether this feels useful outside my own workflow.

It's a great combo with Opus 4.7 + Gemini 3.1 workflows.
VoltAgent/awesome-agent-skills
GitHub Trending[Other] A curated collection of 1000+ agent skills from official dev teams and the community, compatible with Claude Code, Codex, Gemini CLI, Cursor, and more.
Show HN: Open Chronicle – Local Screen Memory for Claude Code and Codex CLI
Show HN (score: 5)[Other] Show HN: Open Chronicle – Local Screen Memory for Claude Code and Codex CLI I built an open source version of OpenAI Chronicle.

Some design decisions I made:

1. Local-first: OCR uses Apple Vision; summarization supports local AI providers via the Vercel AI SDK. Nothing leaves your computer.
2. Multiple providers: exposes MCP so any coding agent can use it.
3. Swift menubar app: efficient, low-footprint.
4. Blocklisted apps: password managers, messaging apps (Slack, WhatsApp, Messenger), and mail clients are on the default blocklist.

Current limitations:

1. Mac only. Mac-first is a feature.
2. Small local models with weak structured-output support will fail on generateObject.
3. Retrieval is LIKE-query keyword search. FTS and optional embeddings are on the list.

Demo video (6s): https://youtu.be/V75tnvIdovc

Curious what you think the right balance is between blocklists and allowlists. Happy to answer anything.
Show HN: MemFactory – Unified Inference and Training Framework for Agent Memory
Show HN[Other] Show HN: MemFactory: Unified Inference and Training Framework for Agent Memory Memory-augmented Large Language Models (LLMs) are essential for developing capable, long-term AI agents. Recently, applying Reinforcement Learning (RL) to optimize memory operations, such as extraction, updating, and retrieval, has emerged as a highly promising research direction. However, existing implementations remain highly fragmented and task-specific, lacking a unified infrastructure to streamline the integration, training, and evaluation of these complex pipelines. To address this gap, we present MemFactory, the first unified, highly modular training and inference framework specifically designed for memory-augmented agents. Inspired by the success of unified fine-tuning frameworks like LLaMA-Factory, MemFactory abstracts the memory lifecycle into atomic, plug-and-play components, enabling researchers to seamlessly construct custom memory agents via a "Lego-like" architecture. Furthermore, the framework natively integrates Group Relative Policy Optimization (GRPO) to fine-tune internal memory management policies driven by multi-dimensional environmental rewards. MemFactory provides out-of-the-box support for recent cutting-edge paradigms, including Memory-R1, RMM, and MemAgent. We empirically validate MemFactory on the open-source MemAgent architecture using its publicly available training and evaluation data. Across the evaluation sets, MemFactory improves performance over the corresponding base models on average, with relative gains of up to 14.8%. By providing a standardized, extensible, and easy-to-use infrastructure, MemFactory significantly lowers the barrier to entry, paving the way for future innovations in memory-driven AI agents.
Show HN: gcx – The Official Grafana Cloud CLI
Show HN (score: 5)[CLI Tool] Show HN: gcx – The Official Grafana Cloud CLI Hi HN,

We're excited to share gcx, a new CLI we've been building for Grafana Cloud.

With the rise of agentic coding tools like Claude Code and Codex we're building faster than ever, but these agents are often blind to what's actually happening in production.

gcx brings the full power of Grafana Cloud observability to your terminal. Query production. Investigate alerts. Let the Assistant root-cause issues. Ship fixes with observability built in. Without leaving your editor. gcx also comes packaged with a skills bundle that allows agents to see and act on your production telemetry. You can ask an agent to root-cause a latency spike, and it can actually fetch the telemetry, analyze the spans, and suggest a fix, all while having the full context of your codebase.

Do check it out and give us feedback!

GitHub link: https://github.com/grafana/gcx
Show HN: Almanac MCP, turn Claude Code into a Deep Research agent
Show HN (score: 8)[Other] Show HN: Almanac MCP, turn Claude Code into a Deep Research agent I am Rohan, and I have grown really frustrated with CC's search and read tools. They use Haiku to summarise all the search results, so it is really slow and often ends up being very lossy.

I built this MCP that you can install into your coding agents so they can actually access the web properly.

Right now it can:

- search the general web
- search Reddit
- read and scrape basically any webpage

Install it: npx openalmanac setup

The MCP is completely free to use. We have also built a central store where you can contribute things you learned while exploring. If you find something useful, you can contribute it to the encyclopedia we're building at Almanac using the same MCP.
Claude Code to be removed from Anthropic's Pro plan?
Hacker News (score: 402)[Other] Claude Code to be removed from Anthropic's Pro plan?

https://x.com/TheAmolAvasare/status/2046725498592722972

https://xcancel.com/TheAmolAvasare/status/2046725498592722972
Zindex – Diagram Infrastructure for Agents
Hacker News (score: 42)[Other] Zindex – Diagram Infrastructure for Agents
Show HN: Hydra – Never stop coding when your AI CLI hits a rate limit
Show HN (score: 5)[CLI Tool] Show HN: Hydra – Never stop coding when your AI CLI hits a rate limit I built Hydra because I kept losing my flow when Claude Code hit usage limits mid-task. I would copy context, open another tool, and then re-explain everything. This was super annoying for me.

Hydra wraps your AI coding CLIs (Claude Code, Codex, OpenCode, Pi, or any terminal-based tool) in a single command. It monitors terminal output for rate limit patterns, and when one provider runs out, you switch to another with one keypress. Your conversation history, git diff, and recent commits are automatically copied to your clipboard so you can paste and keep going.

The fallback chain is configurable. Mine goes Claude Code → OpenCode (free Gemini) → Codex → Pi (free Gemini). The free tiers alone give you ~3000 requests/day, so even after burning through paid limits you can keep working.

Key details:

- Full PTY passthrough: you see the exact same TUI as running the CLI directly
- hydra switch from another terminal signals ALL running sessions (rate limits are account-wide)
- Context extraction parses Claude Code's JSONL session files for real conversation history, not just recent output
- Any CLI that runs in a terminal works as a provider
- Single Go binary, ~200 lines of core logic

https://github.com/saadnvd1/hydra

Nothing amazing, but I wanted to share it in case it's useful. Feel free to modify it as you see fit.
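As a sketch only, the configurable fallback chain described above could be written down in a config file along these lines; the file path and every key below are hypothetical illustrations of the Claude Code → OpenCode → Codex → Pi ordering, not Hydra's actual schema:

```yaml
# Hypothetical ~/.hydra/config.yaml — keys are illustrative, not Hydra's real format
providers:
  - claude      # Claude Code (paid, first choice)
  - opencode    # OpenCode (free Gemini tier)
  - codex       # Codex
  - pi          # Pi (free Gemini tier)
```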
The Vercel breach: OAuth attack exposes risk in platform environment variables
Hacker News (score: 144)[Other] The Vercel breach: OAuth attack exposes risk in platform environment variables

Related discussions:

- Vercel April 2026 security incident - https://news.ycombinator.com/item?id=47824463 - April 2026 (485 comments)
- A Roblox cheat and one AI tool brought down Vercel's platform - https://news.ycombinator.com/item?id=47844431 - April 2026 (145 comments)
Show HN: Daemons – we pivoted from building agents to cleaning up after them
Hacker News (score: 39)[Other] Show HN: Daemons – we pivoted from building agents to cleaning up after them For almost two years, we've been developing Charlie, a coding agent that is autonomous, cloud-based, and focused primarily on TypeScript development. During that time, the explosion in growth and development of LLMs and agents has surpassed even our initially very bullish prognosis. When we started Charlie, we were one of the only teams we knew fully relying on agents to build all of our code. We all know how that has gone: the world has caught up, but working with agents hasn't been all kittens and rainbows, especially for fast-moving teams.

The one thing we've noticed over the last 3 months is that the more you use agents, the more work they create. Dozens of pull requests mean older code gets out of date quickly. Documentation drifts. Dependencies become stale. Developers are so focused on pushing out new code that this crucial work falls through the cracks. That's why we pivoted away from agents and invented what we think is the necessary next step for AI-powered software development.

Today, we're introducing Daemons: a new product category built for teams dealing with operational drag from agent-created output. Named after the familiar background processes from Linux, Daemons are added to your codebase by adding an .md file to your repo, and run in a set-it-and-forget-it way that will make your life easier and accelerate any project. For teams that use Claude, Codex, Cursor, Cline, or any other agent, we think you'll really enjoy what Daemons bring to the table.
CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production
Hacker News (score: 53)[Other] CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production

https://www.brex.com/journal/building-crabtrap-open-source
Show HN: GoModel – an open-source AI gateway in Go; 44x lighter than LiteLLM
Hacker News (score: 35)[API/SDK] Show HN: GoModel – an open-source AI gateway in Go; 44x lighter than LiteLLM Hi, I'm Jakub, a solo founder based in Warsaw.

I've been building GoModel since December with a couple of contributors. It's an open-source AI gateway that sits between your app and model providers like OpenAI, Anthropic, or others.

I built it for my startup to solve a few problems:

- track AI usage and cost per client or team
- switch models without changing app code
- debug request flows more easily
- reduce AI spending with exact and semantic caching

How is it different?

- ~17 MB Docker image; LiteLLM's image is more than 44x bigger ("docker.litellm.ai/berriai/litellm:latest" is ~746 MB on amd64)
- request workflow is visible and easy to inspect
- config is environment-variable-first by default

I'm posting now partly because of the recent LiteLLM supply-chain attack. Their team handled it impressively well, but some people are looking at alternatives anyway, and GoModel is one.

Website: https://gomodel.enterpilot.io

Any feedback is appreciated.
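"Environment-variable-first" configuration generally means a deployment that needs nothing beyond env vars, no config file required. As a sketch only, with variable names and image path that are hypothetical rather than GoModel's documented ones, that style of deployment looks like:

```shell
# Hypothetical env-var-first deployment (all names below are illustrative only)
export GATEWAY_PORT=8080                          # listen port
export GATEWAY_CACHE_MODE=semantic                # exact or semantic response caching
export GATEWAY_OPENAI_API_KEY="$OPENAI_API_KEY"   # upstream provider credential
docker run --rm -p 8080:8080 \
  -e GATEWAY_PORT -e GATEWAY_CACHE_MODE -e GATEWAY_OPENAI_API_KEY \
  example/gomodel-gateway:latest                  # image name is a placeholder
```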
HKUDS/RAG-Anything
GitHub Trending[Other] RAG-Anything: All-in-One RAG Framework
zilliztech/claude-context
GitHub Trending[Other] Code search MCP for Claude Code. Make entire codebase the context for any coding agent.
Show HN: VidStudio, a browser-based video editor that doesn't upload your files
Hacker News (score: 258)[Other] Show HN: VidStudio, a browser-based video editor that doesn't upload your files Hi HN, I built VidStudio, a privacy-focused video editor that runs in the browser. I tried to keep it as frictionless as possible, so there are no accounts and no uploads. Everything is persisted on your machine.

Some of the features: multi-track timeline, frame-accurate seek, MP4 export, audio, video, image, and text tracks, and a WebGL-backed canvas where available. It also works on mobile.

Under the hood, WebCodecs handles frame decode for timeline playback and scrubbing, which is what makes seeking responsive, since decode runs on the hardware decoder when the browser supports it. FFmpeg compiled to WebAssembly handles final encode, format conversion, and anything WebCodecs does not cover. Rendering goes through Pixi.js on a WebGL canvas, with a software fallback when WebGL is not available. Projects live in IndexedDB and the heavy work runs in Web Workers so the UI stays responsive during exports.

Happy to answer technical questions about the tradeoffs involved in keeping the whole pipeline client-side. Any feedback welcome.

Link: https://vidstudio.app/video-editor
A type-safe, realtime collaborative Graph Database in a CRDT
Hacker News (score: 25)[Database] A type-safe, realtime collaborative Graph Database in a CRDT